Patent abstract:
This system (10) comprises: a first sensor (18) for detecting a direction of vision of a first user (14); a computing unit (22) capable of generating a virtual three-dimensional simulation of the virtual environment, based on the data received from the or each first sensor (18); and, for at least one second user (16), an immersive restitution assembly (24) for the virtual three-dimensional simulation generated by the computing unit (22). The system comprises, for the first user (14), a second sensor (20) for detecting the position of a part of a real member of the first user (14), the computing unit (22) being able to create, in the virtual three-dimensional simulation, an avatar of the first user (14), comprising a virtual head and a virtual member, reconstructed and oriented relative to one another based on the data of the first sensor (18) and the second sensor (20).
Publication number: FR3041804A1
Application number: FR1501977
Filing date: 2015-09-24
Publication date: 2017-03-31
Inventors: Igor Fain; Patrice Kurdijian
Applicant: Dassault Aviation SA
IPC main classification:
Patent description:

Virtual three-dimensional simulation system capable of generating a virtual environment bringing together a plurality of users and associated method
The present invention relates to a virtual three-dimensional simulation system capable of generating a virtual environment bringing together a plurality of users, comprising: for at least a first user, a first sensor for detecting a direction of vision of the first user; a computing unit capable of generating a virtual three-dimensional simulation of the virtual environment, on the basis of the data received from the or each first detection sensor; for at least one second user, an immersive restitution assembly for the virtual three-dimensional simulation generated by the computing unit, capable of plunging the or each second user into the virtual three-dimensional simulation.
Such a system is intended in particular to be used to organize technical training sessions grouping several users in the same virtual environment.
In particular, the system according to the invention is suitable for grouping the users in a virtual environment reproducing part of an aircraft, in particular for learning and rehearsing maintenance and/or use procedures of the aircraft.
These procedures usually require successive operations on various equipment, in a specific order and with defined gestures. Generally, such training is conducted in a classroom, using two-dimensional media projected on screens, such as slide presentations including images.
Such presentations are not very representative of the real environment within an aircraft. They provide theoretical knowledge of the procedure to be performed, but little practical experience. Other training sessions are conducted directly on an aircraft or on a mock-up of the aircraft, which makes it possible to understand the procedure to be performed more concretely. During these training sessions, the number of participants who can simultaneously visualize the procedure to be performed must often be limited, especially if the environment is confined, as for example in the technical bay of an aircraft.
Moreover, these training sessions require immobilizing an aircraft or building a representative mock-up of the aircraft, which is expensive and impractical.
In addition, all participants must attend the training at the same time, which can be expensive if participants come from various sites.
It is also known to immerse a single user in a virtual three-dimensional environment, for example by providing a headset capable of rendering a virtual three-dimensional model. The user perceives the virtual environment, but not necessarily the other users, which makes the training not very interactive.
An object of the invention is to provide a three-dimensional simulation system offering an inexpensive and practical, yet highly interactive, medium for interaction between users in a complex platform environment, in particular for training users in the maintenance and/or use of the complex platform. To this end, the subject of the invention is a system of the aforementioned type, characterized in that the system comprises, for the or each first user, a second sensor for detecting the position of a part of a real member of the first user, the computing unit being able to create, in the virtual three-dimensional simulation, an avatar of the or each first user, comprising at least one virtual head and at least one virtual member, reconstructed and oriented relative to one another on the basis of the data of the first sensor and the second sensor.
The system according to the invention may comprise one or more of the following characteristics, taken in isolation or in any technically possible combination: the member and the virtual member are arms of the user and of the avatar respectively; the part of the user's member detected by the second sensor comprises the hand of the first user; the computing unit is able to determine the position of a first region of the virtual member on the basis of the data received from the first detection sensor, and is able to determine the position of a second region of the virtual member from the data received from the second detection sensor; the computing unit is capable of determining the position of the first region of the virtual member after determining the position of the second region of the virtual member; the computing unit is able to generate a representation of a virtual shoulder of the first user, mobile jointly in rotation around a vertical axis with the virtual head of the first user, the first region of the virtual member extending from the end of the virtual shoulder; the system comprises, for a plurality of first users, a first sensor for detecting a direction of vision of the user, and a second sensor for detecting the position of a part of a member of the user, the computing unit being able to create, in the virtual three-dimensional simulation, an avatar of each first user, comprising at least one virtual head and at least one virtual member, reconstructed and oriented relative to one another on the basis of the data of the first sensor and the second sensor of the first user, the or each restitution assembly being adapted to selectively show the avatar of one or more first users in the virtual three-dimensional simulation; the computing unit is capable of placing the avatars of a plurality of first users at the same given location in the virtual three-dimensional simulation, the or each restitution assembly being able to selectively show the avatar of a single first user at the given location; the system comprises, for the or each first user, an immersive restitution assembly for the virtual three-dimensional simulation generated by the computing unit, capable of plunging the or each first user into the virtual three-dimensional simulation; the restitution assembly is adapted to be worn on the head of the first user, the first sensor and/or the second sensor being mounted on the restitution assembly; in a given predefined position of the part of a member of the user detected by the second sensor, the computing unit is able to display at least one information and/or selection window in the virtual three-dimensional simulation visible to the or each first user and/or to the or each second user; the computing unit is able to determine whether the position of the part of the real member of the first user detected by the second sensor is physiologically possible, and to mask the display of the virtual member of the avatar of the first user if that position is not physiologically possible; the system comprises, for the or each first user, a position sensor capable of supplying the computing unit with geographical positioning data of the first user.
The subject of the invention is also a method of producing a virtual three-dimensional simulation bringing together several users, comprising the following steps: providing a system as described above; activating the first sensor and the second sensor and transmitting the data received from the first sensor and the second sensor to the computing unit; generating a virtual three-dimensional simulation of an avatar of the or each first user, comprising at least one virtual head and at least one virtual member, reconstructed and oriented relative to each other based on the data of the first sensor and the second sensor.
The method according to the invention may comprise one or more of the following characteristics, taken in isolation or in any technically possible combination: the generation of the virtual three-dimensional simulation comprises the loading of a representative model of a platform, and the virtual three-dimensional representation, as virtual environment, of a region of the platform, the or each first user moving in the aircraft environment to perform at least one simulated maintenance and/or use operation of the platform.
The invention will be better understood on reading the description which follows, given solely by way of example and with reference to the appended drawings, in which: FIG. 1 is a diagrammatic view of a first virtual three-dimensional simulation system according to the invention; FIG. 2 is a view of the virtual environment created by the simulation system according to the invention, comprising a plurality of avatars representative of several users; FIGS. 3 and 4 are enlarged views illustrating the definition of an avatar; FIG. 5 is a view illustrating a step of activating a selection menu within the virtual three-dimensional simulation; FIG. 6 is a view of an indicator for selecting an area or an object in the virtual three-dimensional simulation; FIG. 7 is a detailed view of a selection menu; FIG. 8 is a view illustrating the selection of a region of an aircraft in the virtual three-dimensional simulation.
A first virtual three-dimensional simulation system 10 according to the invention, capable of generating a virtual environment 12, visible in FIG. 2, bringing together a plurality of users 14, 16, is illustrated in FIG. 1.
The system 10 is intended to be used in particular to simulate a maintenance and/or use operation of a platform, in particular an aircraft, for example as part of a training session.
In this example, at least one first user 14 is able to receive and reproduce information relating to the maintenance and / or use operation, in particular the steps of a maintenance and / or use procedure. At least one second user 16 is a trainer broadcasting the information to each first user 14 and verifying the correct reproduction of the information.
The maintenance and/or use operations comprise for example steps of assembling/disassembling platform equipment, or steps of testing and/or activating platform equipment.
In this example, the system 10 comprises, for each user 14, 16, a user position sensor 17, a first sensor 18 for detecting the direction of vision of the user 14, 16, and a second sensor 20 for detecting the position of a part of a member of the user 14, 16.
The system 10 further comprises at least one calculating and synchronizing unit 22, able to receive and synchronize the data coming from each sensor 17, 18, 20 and to create a virtual three-dimensional simulation bringing together the users 14, 16 in the virtual environment 12, on the basis of the data of the sensors 17, 18, 20 and of a three-dimensional model representative of the virtual environment 12. The three-dimensional model is for example a model of at least one area of the platform.
The system 10 further comprises, for each user 14, 16, a reproduction assembly 24 for the virtual three-dimensional simulation generated by the computing unit 22 from the point of view of the user 14, 16, to immerse each user 14, 16 in the virtual environment 12. The reproduction assembly 24 is for example a virtual reality headset. It is worn on the head of the user 14, 16 with a fixed orientation relative to the user's head. It generally comprises a three-dimensional display system arranged in front of the eyes of the user, including a screen and/or glasses. The reproduction assembly 24 is for example an Oculus Rift DK2 type headset.
The position sensor 17 advantageously comprises at least one element fixed on the reproduction assembly 24.
The position sensor 17 is for example a sensor comprising at least one light source, in particular a light-emitting diode, fixed on the reproduction assembly 24, and an optical detector, for example an infrared detector, arranged facing the user to detect the light source.
Alternatively, the position sensor 17 is a gyroscope/accelerometer unit, fixed on the reproduction assembly 24, whose data are integrated to give the position of the user at each moment.
The position sensor 17 is capable of providing geographic positioning data for the user, in particular for determining the overall movements of the head of the user 14, 16 with respect to a centralized reference point common to all the users 14, 16.
The first detection sensor 18 is able to detect the direction of vision of the user 14, 16.
The first sensor 18 advantageously comprises at least one element fixed on the reproduction assembly 24 so as to be movable together with the head of the user 14, 16. It is able to track the direction of vision of the user along at least one vertical axis and at least one horizontal axis, preferably along at least three axes.
It is for example formed by a gyroscope/accelerometer which may be identical, if necessary, to that of the position sensor 17.
As a variant, the first sensor 18 comprises a light source carried by the reproduction assembly 24 and at least one camera, preferably several cameras detecting the light source, the or each camera being fixed facing the user and possibly shared with the position sensor 17, if any.
The first sensor 18 is able to produce data in a reference frame specific to each user 14, 16, which are then transposed, according to the invention, into the centralized coordinate system using data from the position sensor 17.
The second sensor 20 is a sensor for detecting at least a portion of a real member of the user 14, 16. In particular, the member of the user is an arm, and the second sensor 20 is able to detect the position and orientation of the hand and of at least a portion of the forearm of the user 14, 16.
Preferably, the second sensor 20 is able to detect the position and orientation of the two hands and the associated forearms of the user 14, 16.
The second sensor 20 is for example a motion sensor, advantageously operating by infrared detection. The sensor is for example of the "Leap Motion" type.
Alternatively, the second sensor 20 is a camera operating in the visible range, associated with shape recognition software.
Advantageously, the second sensor 20 is also fixed on the reproduction assembly 24, so as to be movable together with the user's head, minimizing the inconvenience to the user.
The detection field of the sensor 20 extends facing the user 14, 16, to maximize the chances of detecting the part of the member of the user 14, 16 at any time.
The second sensor 20 is able to produce data in a reference frame specific to the sensor 20, which are then transposed, according to the invention, into the reference frame of the first sensor 18, then into the centralized coordinate system, on the basis of the known position and orientation of the second sensor 20 on the reproduction assembly 24 and of the data from the position sensor 17 and the first sensor 18.
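By way of illustration only (this sketch is not part of the original disclosure; the numpy-based pose representation and all names are assumptions), the chain of transpositions described above can be expressed as the composition of two homogeneous transforms: the fixed mounting pose of the second sensor 20 on the reproduction assembly 24, and the head pose in the centralized coordinate system derived from the position sensor 17 and the first sensor 18:

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def to_centralized_frame(p_sensor2: np.ndarray,
                         T_headset_from_sensor2: np.ndarray,
                         T_world_from_headset: np.ndarray) -> np.ndarray:
    """Transpose a point measured by the second sensor into the common frame.

    T_headset_from_sensor2 -- fixed mounting pose of the second sensor on the
                              reproduction assembly (known by construction).
    T_world_from_headset   -- head pose in the centralized frame, built from
                              the position sensor 17 and the first sensor 18.
    """
    p_h = np.append(p_sensor2, 1.0)  # homogeneous point
    return (T_world_from_headset @ T_headset_from_sensor2 @ p_h)[:3]

# Example: hand 30 cm in front of the sensor, headset 1.7 m up in the room.
T_mount = pose_matrix(np.eye(3), np.array([0.0, 0.0, -0.05]))
T_head = pose_matrix(np.eye(3), np.array([0.0, 1.7, 0.0]))
print(to_centralized_frame(np.array([0.0, 0.0, 0.3]), T_mount, T_head))
```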
The data generated by the first sensor 18 and the second sensor 20 are adapted to be transmitted in real time to the computing unit 22, at a frequency for example between 60 Hz and 120 Hz.
Preferably, the reproduction assembly 24 is provided with a data transmission system 26 allowing bidirectional communication between the computing unit 22 and the reproduction assembly 24 via a transmission means, for example including a USB cable, to transmit the data of the sensors 17, 18, 20 and to receive from the computing unit 22 the data necessary for the immersion of the user 14, 16 in the virtual three-dimensional simulation generated by the computing unit 22.
The computing unit 22 comprises at least one processor 30 and at least one memory 32 containing software applications suitable for being executed by the processor 30.
The memory 32 contains in particular an application 34 for loading a three-dimensional model representative of the virtual environment 12 in which the users 14, 16 are intended to be brought together, an application 35 for generating the virtual environment 12 on the basis of the loaded three-dimensional model and, according to the invention, an application 36 for creating and positioning, for each user 14, 16, an animated avatar 38 in the virtual environment 12.
The memory 32 also contains an application 40 for controlling and selectively rendering the virtual environment 12 and the avatar or avatars 38 of each user 14, 16.
The loading application 34 is able to retrieve, in computer form, a three-dimensional model file representative of the virtual environment 12 in which the users 14, 16 will be immersed.
The three-dimensional model is for example a representative model of a platform, in particular an aircraft as a whole, or of part of the platform. The three-dimensional model comprises, for example, relative positioning and shape data of a frame supporting components and of each of the components mounted on the frame. It notably includes data assigning each component to a functional system (for example a serial number for each component).
The model is generally organized in a computer file, in the form of a model tree derived from computer-aided design software, this tree being for example organized by type of system (structure, attachment, equipment). The generation application 35 is able to use the data of the three-dimensional model to generate a virtual three-dimensional representation of the virtual environment 12.
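A hedged sketch of the model tree organization mentioned above (the node fields, names and traversal are illustrative assumptions, not the patent's data format):

```python
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    """One node of the CAD-derived model tree (names are illustrative)."""
    name: str
    system_type: str          # e.g. "structure", "attachment", "equipment"
    serial_number: str = ""   # assignment of the component to a functional system
    children: list = field(default_factory=list)

def components_of_system(node: ModelNode, system_type: str):
    """Walk the tree and yield every component of the requested system type."""
    if node.system_type == system_type:
        yield node
    for child in node.children:
        yield from components_of_system(child, system_type)

frame = ModelNode("bay_frame", "structure", "SN-001", [
    ModelNode("bracket_12", "attachment", "SN-117"),
    ModelNode("pump_unit_3", "equipment", "SN-254"),
])
print([n.name for n in components_of_system(frame, "equipment")])
```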
The application 36 for creating and positioning animated avatars 38 is able to analyze the position of each user 14, 16 in the virtual environment 12 based on the positioning data from the position sensor 17 and the vision direction data received from the first sensor 18. According to the invention, the creation and positioning application 36 is able to create, for each user 14, 16, an animated avatar 38 representative of the attitude and positioning of at least one member of the user, in particular of at least one arm of the user, and to place each avatar 38 in the virtual environment 12.
In the example illustrated in FIG. 3 and FIG. 4, the avatar 38 comprises a virtual head 50, mobile as a function of the movements of the head of the user 14, 16, measured by the first sensor 18, and a virtual trunk 54 connected to the virtual head 50 by a virtual neck 56 and virtual shoulders 58, the virtual trunk 54 and the virtual shoulders 58 being movable jointly in rotation with the virtual head 50. The avatar 38 further comprises two virtual members 59, each virtual member 59 being movable according to the displacement and orientation of the corresponding member portion of the user detected by the second sensor 20. Each virtual member here comprises a virtual hand 62, and a first region 64 and a second region 66 interconnected by a virtual elbow 68.
To create and position the avatar 38, the application 36 comprises a module for positioning the virtual head 50 of the avatar 38, on the basis of the data received from the position sensor 17 and the first sensor 18, a module for positioning the virtual trunk 54 and the virtual shoulders 58 of the avatar 38, according to the positioning data of the virtual head 50, and a module for positioning the virtual members 59 of the user 14, 16 in the virtual environment 12, based in particular on the data of the second sensor 20.
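The three modules thus update the avatar in order: head first, then trunk and shoulders, then members. A minimal, non-authoritative sketch of this pipeline (the attribute names and the neck height are assumptions):

```python
import numpy as np

class Avatar:
    """Minimal avatar state updated in the order described above (a sketch;
    the attribute names are assumptions, not the patent's)."""

    NECK_HEIGHT = 0.12  # height of the virtual neck 56, illustrative value

    def __init__(self):
        self.head = np.zeros(3)   # virtual head 50
        self.trunk = np.zeros(3)  # virtual trunk 54
        self.yaw = 0.0            # rotation about the vertical axis A-A'
        self.hands = {}           # virtual hands 62, keyed "left"/"right"

    def update_head(self, head_position, head_yaw):
        # Module 1: place the virtual head from sensors 17 and 18.
        self.head = np.asarray(head_position, dtype=float)
        self.yaw = float(head_yaw)

    def update_trunk(self):
        # Module 2: the trunk hangs a fixed neck height below the head.
        self.trunk = self.head - np.array([0.0, self.NECK_HEIGHT, 0.0])

    def update_hand(self, side, hand_position):
        # Module 3: hands come from the second sensor 20.
        self.hands[side] = np.asarray(hand_position, dtype=float)

avatar = Avatar()
avatar.update_head([0.0, 1.7, 0.0], head_yaw=0.3)
avatar.update_trunk()
avatar.update_hand("right", [0.3, 1.3, -0.2])
print(avatar.trunk, avatar.hands["right"])
```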
For each user 14, 16, the positioning module of the virtual head 50 is able to use the data of the position sensor 17, to locate the virtual head 50 of the avatar 38 in the virtual environment 12.
The data of the position sensor 17 are re-aligned in a reference frame common to all the users 14, 16 in the virtual environment 12.
In a first mode of operation, the avatars of the users 14, 16 are positioned at locations that are distinct from each other within the virtual environment 12, as illustrated in FIG. 2.
In another mode of operation, the avatars of the users 14, 16 are positioned overlapping one another, especially if the virtual environment 12 is confined. In this case, as will be seen below, each user 14, 16 does not see all the avatars 38 present in the confined virtual environment 12.
The positioning module of the virtual head 50 is able to process data from the first sensor 18 to generate, in real time, orientation data of the virtual head 50 of the avatar 38 corresponding to the direction of vision measured by the first sensor 18.
The virtual head of the avatar 38 here has a substantially spherical shape. It comprises a marker representative of the direction of vision, including a block 52 illustrating the position of the eyes of the user and of the restitution assembly 24 placed over the eyes. The orientation of the vision direction of the avatar 38 is possible around at least one vertical axis A-A' and one horizontal axis B-B', and advantageously along a second horizontal axis C-C'. The avatar 38 is thus not limited in rotation and can move its direction of vision by more than 90° on each side of its basic vision direction.
The module for determining the position of the virtual trunk 54 and the shoulders 58 is capable of setting, in real time, the position of the virtual trunk 54, also represented by a sphere on the avatar 38, at a predetermined distance from the head 50. This predetermined distance corresponds to the height of the virtual neck 56 of the avatar 38, represented by a vertically oriented cylinder.
The virtual neck 56 is placed vertically at the vertical pivot point of the virtual head 50 about the vertical axis A-A'.
The positioning module of the virtual trunk 54 and the virtual shoulders 58 is further adapted to fix the angular orientation of the virtual shoulders 58, keeping them in a vertical plane, at a fixed angle with respect to the horizontal, on both sides of the vertical axis A-A' of the neck 56.
It is adapted to rotate the plane containing the virtual shoulders 58 together with the virtual head 50 around the vertical axis A-A', so as to continuously follow the rotation of the virtual head 50 about the vertical axis A-A'.
Thus, the virtual shoulders 58 of the avatar 38 remain fixed in distance and in orientation in their plane relative to the virtual trunk 54, but pivot together with the virtual head 50 about the axis A-A'.
The positioning module of the virtual trunk 54 and the virtual shoulders 58 is furthermore able to define, in real time, the position of the ends 60 of the virtual shoulders 58, represented here by spheres, which serve as a basis for the construction of the virtual members 59 of the avatar 38, as will be seen below.
The position of the ends 60 is defined by a predetermined distance d1 between the ends 60 and the trunk 54, for example of the order of 20 cm (mean head-to-shoulder distance).
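As an illustrative sketch of this construction (the axis conventions and the droop angle are assumptions; the d1 value of about 20 cm comes from the text), the two shoulder ends 60 can be placed in a vertical plane that pivots with the head yaw about the axis A-A':

```python
import numpy as np

def shoulder_ends(trunk: np.ndarray, head_yaw: float,
                  d1: float = 0.20, droop: float = 0.0):
    """Place the two shoulder ends 60 at distance d1 from the trunk 54,
    in a vertical plane that pivots with the head about the axis A-A'.

    head_yaw -- rotation of the virtual head about the vertical axis, radians.
    droop    -- fixed angle of the shoulders with respect to the horizontal.
    """
    # Lateral direction of the shoulder plane after yawing about the vertical.
    lateral = np.array([np.cos(head_yaw), 0.0, -np.sin(head_yaw)])
    down = np.array([0.0, -1.0, 0.0])
    right = trunk + d1 * (np.cos(droop) * lateral + np.sin(droop) * down)
    left = trunk + d1 * (-np.cos(droop) * lateral + np.sin(droop) * down)
    return left, right

left, right = shoulder_ends(np.array([0.0, 1.55, 0.0]), head_yaw=np.pi / 6)
print(left, right)
```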
The module for positioning the virtual members 59 is adapted to receive the data from the second sensor 20 in order to determine the position and orientation of a portion of each member of the user 14, 16 in the real world.
In this example, the portion of the member of the user 14, 16 detected by the second sensor 20 includes the user's hand and at least the beginning of the forearm.
The positioning module of the virtual members 59 is able to process the data of the second sensor 20 to readjust the position data coming from the second sensor 20 from the reference frame of the second sensor 20 to the common reference frame, in particular by relying on the fixed position of the second sensor 20 on the reproduction assembly 24 and on the data of the position sensor 17 and the first sensor 18.
The module for positioning the virtual members 59 is capable of generating and positioning an oriented virtual representation of the part of the member of the user 14, 16 detected by the second sensor 20, here a virtual hand 62 on the avatar 38.
The module for positioning the virtual members 59 is also able to determine the orientation and the position of the second region 66 of each virtual member on the basis of the data received from the second sensor 20. In this example, the second region 66 of the virtual member is the forearm. For this purpose, the module is able to determine the orientation of the start of the forearm of the user 14, 16 in the real world, on the basis of the data of the sensor 20, and to use this orientation to orient the second region 66 of each virtual member 59, from the position of the virtual hand 62, the orientation of the beginning of the forearm, and a predetermined distance d2 defining the length of the second region 66 between the virtual hand 62 and a virtual elbow 68, for example of the order of 30 cm (average length of the forearm).
Then, once the position of the virtual elbow 68 is known, the module for positioning the virtual members 59 is able to determine the position and the orientation of the first region 64 of each virtual member, between the end 60 of the virtual shoulder 58, obtained in particular from the data of the first sensor 18 as described above, and the virtual elbow 68.
The positioning module is further able to determine whether the position of the virtual hand 62, as obtained from the sensor 20, is physiologically possible. This determination is for example made by determining the distance d3 between the end 60 of the virtual shoulder 58 and the virtual elbow 68 and comparing it to a maximum possible physiological value, for example equal to 45 cm.
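A minimal sketch of this two-step reconstruction and of the plausibility test, assuming the d2 ≈ 30 cm and d3 ≈ 45 cm values quoted above and an assumed vector convention for the forearm direction (from the hand back toward the elbow):

```python
import numpy as np

D2_FOREARM = 0.30  # length of the second region 66 (forearm), ~30 cm
D3_MAX = 0.45      # maximum physiologically possible shoulder-elbow span

def reconstruct_member(hand: np.ndarray, forearm_dir: np.ndarray,
                       shoulder_end: np.ndarray):
    """Rebuild one virtual member 59 from the data described above.

    Returns (elbow, upper_arm_vector, physiologically_possible). The elbow 68
    sits at distance d2 from the virtual hand 62 along the measured forearm
    direction; the first region 64 then joins the shoulder end 60 to it.
    """
    forearm_dir = forearm_dir / np.linalg.norm(forearm_dir)
    elbow = hand + D2_FOREARM * forearm_dir         # second region 66
    upper_arm = elbow - shoulder_end                # first region 64
    possible = np.linalg.norm(upper_arm) <= D3_MAX  # hide the member otherwise
    return elbow, upper_arm, possible

elbow, upper, ok = reconstruct_member(
    hand=np.array([0.35, 1.2, -0.3]),
    forearm_dir=np.array([-0.2, 0.3, 1.0]),
    shoulder_end=np.array([0.20, 1.5, 0.0]))
print(elbow, ok)
```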
Thus, for each user 14, 16, an avatar 38 corresponding to the user 14, 16 is created, and its characteristics and positioning are defined by the creation and positioning application 36. The avatar 38 follows the general orientations of the head and hands of the user 14, 16. The avatar 38 further has animated virtual members 59, whose orientations are close, but not identical, to those of the real members of the user 14, 16 in the real world, which simplifies the operation of the system 10 while offering a representative perception of the actual movements of the members.
The definition and position information of each avatar 38 are defined and/or transposed in the common reference frame and are shared within the computing unit 22.
Each avatar 38 can thus be positioned and oriented in real time in the virtual environment 12.
The application 40 for controlling and rendering the virtual environment 12 and the avatars 38 is able to process the data generated by the creation and positioning application 36 to render, in each reproduction assembly 24, a virtual three-dimensional representation of the virtual environment 12 and of at least one avatar 38 present in this virtual environment 12.
On this basis, the application 40 is able to generate a virtual three-dimensional representation specific to each user 14, 16, which depends on the position of the user 14, 16 in the virtual environment 12 and on the direction of vision of the user 14, 16.
The virtual three-dimensional representation specific to each user 14, 16 is adapted to be transmitted in real time to the reproduction assembly 24 of the user 14, 16 concerned. For this purpose, the application 40 comprises, for each user 14, 16, a module for controlling the display of the virtual environment 12 and the selective display of one or more avatars 38 of other users 14, 16 in this virtual environment 12, and a module for partially masking the avatar 38 of the user 14, 16 and/or of other users 14, 16.
Advantageously, the application 40 further comprises a module for displaying and/or selecting virtual objects in the environment from the avatar 38 of the user 14, 16. The control and rendering application 40 is, for example, controlled and parameterized only by the second user 16.
The display control module is able to process the data obtained centrally in the computing unit 22, in real time, to display, in the reproduction assembly 24 associated with a given user 14, 16, a virtual three-dimensional representation of the virtual environment 12, taken at the position of the user 14, 16, in the direction of vision of the user, as determined by the position sensor 17 and the first sensor 18.
The display control module is furthermore able to display in the virtual three-dimensional representation the avatars 38 of one or more users 14, 16, according to the preferences provided by the second user 16.
In one mode of operation, the display control module is able to display, for each user 14, all the avatars 38 of the other users 14, 16 present in the virtual environment 12.
In another mode of operation, the display control module is able to keep the avatar 38 of at least one user 14, 16 hidden.
Thus, the second user 16 is able to configure the display control module so that his reproduction assembly 24 shows only the avatar 38 of a chosen user 14, without showing the avatars of the other users 14.
This makes it possible, for example, to isolate one or more users 14 and to exclude the other users 14, who advantageously receive a message indicating that they are temporarily excluded from the simulation.
Similarly, the second user 16 is able to set the display control module to prevent each first user 14 from seeing the avatars 38 of the other users 14 in their respective reproduction assemblies, while retaining the ability to observe all the users 14 in his own reproduction assembly 24.
This makes it possible to group a large number of users at the same confined location in the virtual environment 12, while avoiding that users are hampered by the avatars 38 of the other users 14, 16. This is particularly advantageous compared with a real environment, which could not accommodate all the users 14, 16 in a confined location.
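A hedged sketch of such a selective-display rule (the policy vocabulary is an assumption; the patent only specifies that the second user parameterizes which avatars each reproduction assembly shows):

```python
class DisplayControl:
    """Sketch of the selective-display rule set by the second user (trainer).
    The policy names are illustrative, not taken from the patent."""

    def __init__(self, policy="all"):
        self.policy = policy       # "all", "none", or a set of visible ids
        self.trainer_id = "trainer"

    def visible_avatars(self, viewer_id, all_ids):
        # The trainer always sees every avatar in his own reproduction assembly.
        if viewer_id == self.trainer_id:
            return [i for i in all_ids if i != viewer_id]
        if self.policy == "all":
            return [i for i in all_ids if i != viewer_id]
        if self.policy == "none":
            return []
        return [i for i in self.policy if i != viewer_id]

ctrl = DisplayControl(policy="none")  # first users see no other avatars...
print(ctrl.visible_avatars("user_1", ["user_1", "user_2", "trainer"]))
print(ctrl.visible_avatars("trainer", ["user_1", "user_2", "trainer"]))  # ...trainer sees all
```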
The partial masking module is able to mask the upper part of the own avatar 38 of the user 14, 16 in the virtual three-dimensional representation generated by the reproduction assembly 24 of this user 14, 16. Thus, the virtual head 50, the virtual shoulders 58 and the virtual neck 56 of the own avatar 38 of the user 14, 16 are masked in his reproduction assembly 24, so as not to create unpleasant sensations due to the different positioning between the virtual shoulders 58 and the real shoulders.
The partial masking module is also able to mask the virtual members 59 of at least one user 14, 16, in the absence of data detected by the second sensor 20 of this user 14, 16, and/or if these data produce virtual hand 62 positions that are not physiologically possible, as described above.
The module for displaying and/or selecting virtual objects is adapted to allow the display of a control menu when at least a portion of the member of the user 14, 16 is in a predefined position with respect to the head of the user 14, 16.
The predefined position is for example a particular orientation of the palm of the hand of the user 14, 16 relative to his head, in particular when the palm of the hand faces the head. For this purpose, the module for displaying and/or selecting virtual objects is able to determine the angle between a vector perpendicular to the palm of the hand, obtained from the second sensor 20, and a second vector extending between the hand and the head.
If this angle is less than a given value, for example 80°, which occurs when the palm of the hand approaches the head to face it (see FIG. 5), the module for displaying and/or selecting virtual objects is able to display a selection menu 90 in the virtual environment 12, next to the head of the user 14, 16.
The module for displaying and/or selecting virtual objects is able to close the selection menu 90 if the aforementioned angle increases beyond the preset value for a predefined time, for example greater than one second.
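An illustrative sketch of this open/close logic, assuming the 80° threshold and one-second close delay quoted above (the vector math and class names are assumptions):

```python
import numpy as np

OPEN_ANGLE = np.radians(80.0)  # threshold quoted above
CLOSE_DELAY = 1.0              # seconds before the menu closes again

def palm_head_angle(palm_normal, hand_pos, head_pos):
    """Angle between the palm normal (from the second sensor 20) and the
    hand-to-head vector; small angles mean the palm faces the head."""
    v = head_pos - hand_pos
    cosang = np.dot(palm_normal, v) / (np.linalg.norm(palm_normal) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

class MenuTrigger:
    def __init__(self):
        self.open = False
        self.time_above = 0.0  # time spent beyond the threshold

    def update(self, angle, dt):
        if angle < OPEN_ANGLE:
            self.open, self.time_above = True, 0.0
        elif self.open:
            self.time_above += dt
            if self.time_above > CLOSE_DELAY:  # sustained for the preset time
                self.open = False
        return self.open

trig = MenuTrigger()
a = palm_head_angle(np.array([0.0, 0.0, 1.0]), np.array([0.3, 1.3, -0.4]),
                    np.array([0.0, 1.7, 0.0]))
print(np.degrees(a), trig.update(a, dt=1 / 90))
```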
The module for displaying and/or selecting virtual objects is adapted to allow the choice of a function 92 in the selection menu 90, by moving a finger of the virtual hand 62 of the avatar 38 over a selection area of the displayed selection menu 90.
In one variant, the module for displaying and/or selecting virtual objects is adapted to allow the selection of a function 92 in the displayed menu by performing a ray cast. The ray cast consists in maintaining the direction of vision in the reproduction assembly 24 so as to target the function 92 to be selected for a predefined time.
If the direction of vision, as detected by the first sensor 18, targets the area corresponding to the function 92 for a duration greater than a predetermined time, the module for displaying and/or selecting virtual objects is able to select this function. Advantageously, it is suitable for displaying a counter 94, visible in FIG. 6, representative of the aiming time necessary to activate the selection.
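A minimal sketch of this dwell-based selection with its progress counter 94 (the two-second dwell time and 90 Hz update rate are assumed values, not from the patent):

```python
class DwellSelector:
    """Ray-cast selection: keep aiming at a function 92 for a predetermined
    time; a counter 94 reports progress. Timing values are illustrative."""

    def __init__(self, dwell_time=2.0):
        self.dwell_time = dwell_time
        self.target = None
        self.elapsed = 0.0

    def update(self, aimed_target, dt):
        """Call every frame with the item under the gaze ray (or None)."""
        if aimed_target != self.target:  # gaze moved: restart the counter
            self.target, self.elapsed = aimed_target, 0.0
            return None, 0.0
        if self.target is None:
            return None, 0.0
        self.elapsed += dt
        progress = min(self.elapsed / self.dwell_time, 1.0)  # counter 94
        if self.elapsed >= self.dwell_time:
            selected, self.elapsed = self.target, 0.0
            return selected, progress  # activate the selection
        return None, progress

sel = DwellSelector(dwell_time=2.0)
for _ in range(181):  # ~2 s of steady aiming at 90 Hz
    chosen, progress = sel.update("C3_enlarge", dt=1 / 90)
print(chosen, round(progress, 2))
```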
The module for displaying and/or selecting virtual objects is also suitable for displaying information corresponding to an element present in the virtual environment 12, for example a part of the aircraft, when this part is selected either by aiming, as described above, or by virtual contact between the virtual hand 62 of the avatar 38 of the user and the part.
In the example shown in FIG. 7, the module for displaying and/or selecting virtual objects is suitable for displaying a context menu 96 designating the part and a certain number of possible choices C1 to C4 for the user, such as hiding the part (C1), isolating the part (C2), enlarging the part (C3), or canceling the selection (C4).
In the variant shown in FIG. 8, the user 16 is able to display a scaled-down model 98 of the platform in order to select a zone 99 of this platform intended to be used as the virtual environment 12. The selection is performed as previously, by virtual contact between the virtual hand 62 of the avatar 38 of the user and the model 98, and/or by aiming.
Once the selection is made, the virtual environment 12 is modified to display the selected zone 99.
A method for developing and implementing a three-dimensional virtual simulation shared between several users 14, 16 will now be described.
Initially, the virtual simulation system 10 is activated. Each user 14, 16 is equipped with a reproduction assembly 24 fitted with a position sensor 17, a first sensor 18 for detecting a direction of vision of the user 14, 16 and a second sensor 20 for detecting the position of a part of a member of the user 14, 16. The computing unit 22 retrieves the data concerning the virtual environment 12 in which the users 14, 16 are intended to be virtually immersed. The data come for example from a digital model of the platform, or of the region of the platform, in which the users 14, 16 will be immersed. The generation application 35 creates a virtual three-dimensional representation of the virtual environment 12. The computing unit 22 then collects in real time the data coming from each sensor 17, 18, 20 to create and position an avatar 38 corresponding to each user 14, 16 in the virtual environment 12. For this purpose, for each user 14, 16, the creation and positioning application 36 transposes the data of the second sensor 20 to place them in the reference frame associated with the first sensor 18, and then transposes the data obtained, as well as the data from the first sensor 18, into a reference frame of the virtual environment 12 common to all the users.
The positioning module of the virtual head 50 uses the data of the position sensor 17 and the data of the first sensor 18 to orient the virtual head 50 of the avatar 38 and the marker 52 representative of the direction of vision.
The positioning module of the virtual trunk 54 and the virtual shoulders 58 then determines the position and orientation of the virtual trunk 54, and sets the orientation of the virtual shoulders 58 in a vertical plane whose orientation pivots jointly with the direction of vision around a vertical axis A-A' passing through the virtual head 50. It then determines the position of each end 60 of a virtual shoulder, as defined above.
Simultaneously, the positioning module of the virtual members 59 determines the position and orientation of the hands and forearms of the user 14, 16, from the second sensor 20.
The positioning module of the virtual members 59 then determines the position and the orientation of the virtual hand 62 and of the second region 66 of the virtual member, up to the virtual elbow 68 situated at a predefined distance from the virtual hand 62. It then determines the position of the first region 64 of the virtual member 59 linearly connecting the end 60 of the virtual shoulder 58 to the elbow 68. At each moment, the position and orientation of each part of the avatar 38 corresponding to each user 14, 16 are thus obtained by the central unit 22 in a frame of reference common to all the users 14, 16.
Then, according to the preferences selected by the second user 16, the display control module of the rendering application 40 provides the reproduction assembly 24 of at least one user 14, 16 with a three-dimensional representation of the virtual environment 12 and of the avatar or avatars 38 of one or more other users 14, 16.
The masking module masks the upper part of the own avatar 38 of the user 14, as previously described, in particular the virtual head 50 and the virtual shoulders 58, to avoid interfering with the vision of the user 14, 16.
In addition, the masking module detects the physiologically impossible positions of the virtual hand 62 of each user 14, 16, based on the calculated length of the first region 64 of the virtual member 59, as previously described.
When physiologically impossible positions are detected, the display of the corresponding virtual member 59 is masked.
With the system 10 according to the invention, the users 14, 16 can evolve in the same virtual environment 12 by being represented in the form of an animated avatar 38.
Each user 14, 16 is able to observe the avatars of the other users 14, 16, which are correctly located in the virtual environment 12.
The provision of an animated avatar 38, based on orientation data of the head of the user and on the real position of a part of the members of the user, also makes it possible to follow the gestures of each of the users 14, 16 in the virtual environment 12 through their respective avatars 38.
This makes it possible to organize a meeting between several users 14, 16 in a virtual environment 12, without the users 14, 16 necessarily being located in the same place.
In addition, the animated avatars 38 allow at least one user 16 to track the position and gestures of another user 14 or a plurality of users 14 simultaneously.
Thus, users 14 can simulate, simultaneously or individually, maintenance and/or use operations of a platform, and at least one user 16 is able to follow the operations performed.
The selection, for each user 14, 16, of the avatars 38 that the user 14, 16 can see increases the functionality of the system 10. It is thus possible for a user 16 to monitor and evaluate the movements of other users 14 simultaneously, and to allow users 14 to designate equipment or circuits on the platform, without each user 14 being able to see the movements of the other users 14.
The system 10 is further advantageously equipped with means making it possible to display information and/or selection windows in the virtual three-dimensional environment 12, and to select functions within these windows directly in the virtual environment 12.
In addition, the system 10 and the associated method make it possible to place a plurality of users 14 in the same confined region, whereas in reality such a region would be too narrow to accommodate all the users 14, 16.
The perception of the other users 14, 16 via the animated avatars 38 is particularly rich, since each user 14, 16 can selectively observe the general direction of the head of each other user 14, 16, as well as the position of the hands and an overall representation close to the position of the members of the user 14, 16.
In a variant, the system 10 includes a system for recording the movements of the avatar or avatars 38 in the virtual environment 12 over time, and a system for replaying the recorded data immersively or on a screen.
In another variant, the second user 16 is not represented by an avatar 38 in the virtual environment 12. He then does not necessarily carry a first sensor 18 or a second sensor 20.
In one variant, the control and rendering application 40 is capable of varying the level of transparency of each avatar 38 situated at a distance from a given user 14, 16, as a function of the distance separating this avatar 38 from the avatar 38 of the given user in the virtual environment 12. For example, if the avatar 38 of another user 14, 16 approaches the avatar 38 of the given user, the level of transparency increases, until the avatar 38 of the other user 14, 16 becomes completely transparent when the distance between the avatars is less than a defined distance, for example between 10 cm and 15 cm.
On the contrary, the level of transparency decreases as the avatars 38 move apart.
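An illustrative opacity ramp consistent with this behaviour (the 10-15 cm fully-transparent threshold comes from the text; the fade-start distance is an assumption):

```python
def avatar_opacity(distance, fade_start=0.60, fully_transparent=0.12):
    """Opacity of another user's avatar as a function of its distance to the
    viewer's avatar (metres). Below ~10-15 cm it becomes fully transparent;
    the 60 cm fade-start distance is an assumed value.
    """
    if distance <= fully_transparent:
        return 0.0  # completely see-through
    if distance >= fade_start:
        return 1.0  # fully opaque when far away
    return (distance - fully_transparent) / (fade_start - fully_transparent)

for d in (0.05, 0.12, 0.30, 0.80):
    print(f"{d:.2f} m -> opacity {avatar_opacity(d):.2f}")
```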
Claims (15)
1. - A virtual three-dimensional simulation system (10) for generating a virtual environment (12) bringing together a plurality of users, comprising: for at least a first user (14), a first sensor (18) for detecting a direction of vision of the first user (14); a computing unit (22) capable of generating a virtual three-dimensional simulation of the virtual environment (12), on the basis of the data received from the or each first detection sensor (18); for at least one second user (16), an immersive restitution assembly (24) for the virtual three-dimensional simulation generated by the computing unit (22), capable of plunging the or each second user (16) into the virtual three-dimensional simulation; characterized in that the system (10) comprises, for the or each first user (14), a second sensor (20) for detecting the position of a portion of a real member of the first user (14), the computing unit (22) being able to create, in the virtual three-dimensional simulation, an avatar (38) of the or each first user (14), comprising at least one virtual head (50) and at least one virtual member (59), reconstructed and oriented relative to one another based on the data of the first sensor (18) and the second sensor (20).
2. - System (10) according to claim 1, wherein the member and the virtual member (59) are arms of the user and of the avatar (38) respectively.
The system (10) of claim 2, wherein the portion of the user's member detected by the second sensor (20) comprises the hand of the first user.
4. - System (10) according to any one of the preceding claims, wherein the computing unit (22) is adapted to determine the position of a first region (64) of the virtual member (59), on the basis of the data received from the first detection sensor (18), and is capable of determining the position of a second region (66) of the virtual member (59) from the data received from the second detection sensor (20).
5. - System (10) according to claim 4, wherein the computing unit (22) is able to determine the position of the first region (64) of the virtual member (59) after determining the position of the second region (66) of the virtual member (59).
6. - System (10) according to one of claims 4 or 5, taken in combination with one of claims 2 or 3, wherein the computing unit (22) is adapted to generate a representation of a virtual shoulder (58) of the first user (14), movable jointly in rotation about a vertical axis (A-A') with the virtual head (50) of the first user (14), the first region of the virtual member (59) extending from the end (60) of the virtual shoulder (58).
7. - System (10) according to any one of the preceding claims, comprising, for a plurality of first users (14), a first sensor (18) for detecting a direction of vision of the user (14, 16), and a second sensor (20) for detecting the position of a part of a member of the user (14, 16), the computing unit (22) being able to create, in the virtual three-dimensional simulation, an avatar (38) of each first user (14), comprising at least one virtual head (50) and at least one virtual member (59), reconstructed and oriented relative to each other based on the data of the first sensor (18) and the second sensor (20) of the first user (14), the or each reproduction assembly (24) being able to selectively show the avatar (38) of one or more first users (14) in the virtual three-dimensional simulation.
8. - System (10) according to claim 7, wherein the computing unit (22) is adapted to place the avatars (38) of a plurality of first users (14) at the same given location in the virtual three-dimensional simulation, the or each restitution assembly (24) being adapted to selectively show the avatar (38) of a single first user (14) at the given location.
9. - System (10) according to any one of the preceding claims, comprising, for the or each first user (14), an immersive restitution assembly (24) for the virtual three-dimensional simulation generated by the computing unit (22), capable of plunging the or each first user (14) into the virtual three-dimensional simulation.
10. - System (10) according to claim 9, wherein the restitution assembly (24) is adapted to be carried by the head of the first user (14), the first sensor (18) and/or the second sensor (20) being mounted on the restitution assembly (24).
11. - System (10) according to any one of claims 9 to 10, wherein, in a given predetermined position of the part of a member of the user detected by the second sensor (20), the computing unit (22) is adapted to display at least one information and/or selection window (90) in the virtual three-dimensional simulation, visible to the or each first user (14) and/or to the or each second user (16).
12. - System (10) according to any one of the preceding claims, wherein the computing unit (22) is able to determine whether the position of the part of the real member of the first user (14) detected by the second sensor (20) is physiologically possible, and to hide the display of the virtual member (59) of the avatar (38) of the first user (14) if the position of the part of the real member of the first user (14) detected by the second sensor (20) is not physiologically possible.
13. - System (10) according to any one of the preceding claims, comprising, for the or each first user (14), a position sensor (17) adapted to provide the computing unit (22) with geographical positioning data of the first user (14).
14. - Method for producing a virtual three-dimensional simulation bringing together several users (14, 16), comprising the following steps: - providing a system (10) according to any one of claims 1 to 13; - activating the first sensor (18) and the second sensor (20) and transmitting the data received from the first sensor (18) and the second sensor (20) to the computing unit (22); - generating a virtual three-dimensional simulation of an avatar (38) of the or each first user (14), comprising at least one virtual head (50) and at least one virtual member (59), reconstructed and oriented relative to each other on the basis of the data of the first sensor (18) and the second sensor (20).
15. - The method of claim 14, wherein the generation of the virtual three-dimensional simulation comprises the loading of a representative model of a platform, and the virtual three-dimensional representation, as virtual environment, of a region of the platform, the or each first user (14) moving in the aircraft environment to perform at least one simulated maintenance and/or use operation of the platform.
Similar technologies:
Publication number | Publication date | Patent title
FR3041804A1|2017-03-31|VIRTUAL THREE-DIMENSIONAL SIMULATION SYSTEM FOR GENERATING A VIRTUAL ENVIRONMENT COMPRISING A PLURALITY OF USERS AND ASSOCIATED METHOD
JP6643357B2|2020-02-12|Full spherical capture method
US20190122440A1|2019-04-25|Content display property management
EP3137976B1|2017-11-15|World-locked display quality feedback
US20210209857A1|2021-07-08|Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
CN105452935B|2017-10-31|The perception based on predicting tracing for head mounted display
US8953022B2|2015-02-10|System and method for sharing virtual and augmented reality scenes between users and viewers
US20170123488A1|2017-05-04|Tracking of wearer's eyes relative to wearable device
US20150317833A1|2015-11-05|Pose tracking an augmented reality device
CN105324811A|2016-02-10|Speech to text conversion
CN107810633A|2018-03-16|Three-dimensional rendering system
US20200363867A1|2020-11-19|Blink-based calibration of an optical see-through head-mounted display
CN109923509A|2019-06-21|The collaboration of object in virtual reality manipulates
EP3195593A1|2017-07-26|Device and method for orchestrating display surfaces, projection devices and 2d and 3d spatial interaction devices for creating interactive environments
EP3602253A1|2020-02-05|Transparency system for commonplace camera
CN112104595A|2020-12-18|Location-based application flow activation
US20200159875A1|2020-05-21|Experience driven development of mixed reality devices with immersive feedback
FR3077910A1|2019-08-16|METHOD FOR AIDING THE MAINTENANCE OF A COMPLEX SYSTEM
WO2017149254A1|2017-09-08|Man/machine interface with 3d graphics applications
US20200074725A1|2020-03-05|Systems and method for realistic augmented reality | lighting effects
US11087527B2|2021-08-10|Selecting an omnidirectional image for display
EP2823868A1|2015-01-14|Method for reproducing audiovisual content provided with parameters for controlling haptic actuator commands and device implementing the method
FR3097363A1|2020-12-18|Digital mission preparation system
WO2021205130A1|2021-10-14|System for connecting a surgical training device to a virtual device
WO2021089437A1|2021-05-14|Method for managing a display interface
Patent family:
Publication number | Publication date
FR3041804B1|2021-11-12|
US20170092223A1|2017-03-30|
CA2942652A1|2017-03-24|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US20110154266A1|2009-12-17|2011-06-23|Microsoft Corporation|Camera navigation for presentations|
US8920172B1|2011-03-15|2014-12-30|Motion Reality, Inc.|Method and system for tracking hardware in a motion capture environment|
DE102012017700A1|2012-09-07|2014-03-13|Sata Gmbh & Co. Kg|System and method for simulating operation of a non-medical tool|
WO2015044851A2|2013-09-25|2015-04-02|Mindmaze Sa|Physiological parameter measurement and feedback system|
FR3067848A1|2017-06-16|2018-12-21|Kpass Airport|METHOD FOR THE PRACTICAL TRAINING OF A TRACK AGENT USING A VIRTUAL ENVIRONMENT AND INSTALLATION FOR ITS IMPLEMENTATION|
CN113609599A|2021-10-09|2021-11-05|Beihang University|Wall surface distance effective unit calculation method for aircraft turbulence flow-around simulation|
US7626569B2|2004-10-25|2009-12-01|Graphics Properties Holdings, Inc.|Movable audio/video communication interface system|
US9728006B2|2009-07-20|2017-08-08|Real Time Companies, LLC|Computer-aided system for 360° heads up display of safety/mission critical data|
US9696795B2|2015-02-13|2017-07-04|Leap Motion, Inc.|Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments|
GB2554914A|2016-10-14|2018-04-18|Vr Chitect Ltd|Virtual reality system and method|
US20180232132A1|2017-02-15|2018-08-16|Cae Inc.|Visualizing sub-systems of a virtual simulated element in an interactive computer simulation system|
KR102275520B1|2018-05-24|2021-07-12|TMRW Foundation IP & Holding SARL|Two-way real-time 3d interactive operations of real-time 3d virtual objects within a real-time 3d virtual world representing the real world|
US11115468B2|2019-05-23|2021-09-07|The Calany Holding S. À R.L.|Live management of real world via a persistent virtual world system|
US11196964B2|2019-06-18|2021-12-07|The Calany Holding S. À R.L.|Merged reality live event management system and method|
Legal status:
2016-09-05| PLFP| Fee payment|Year of fee payment: 2 |
2017-03-31| PLSC| Publication of the preliminary search report|Effective date: 20170331 |
2017-08-24| PLFP| Fee payment|Year of fee payment: 3 |
2018-08-24| PLFP| Fee payment|Year of fee payment: 4 |
2019-08-22| PLFP| Fee payment|Year of fee payment: 5 |
2020-08-12| PLFP| Fee payment|Year of fee payment: 6 |
2021-08-11| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
FR1501977A|FR3041804B1|2015-09-24|2015-09-24|VIRTUAL THREE-DIMENSIONAL SIMULATION SYSTEM SUITABLE TO GENERATE A VIRTUAL ENVIRONMENT GATHERING A PLURALITY OF USERS AND RELATED PROCESS|FR1501977A| FR3041804B1|2015-09-24|2015-09-24|VIRTUAL THREE-DIMENSIONAL SIMULATION SYSTEM SUITABLE TO GENERATE A VIRTUAL ENVIRONMENT GATHERING A PLURALITY OF USERS AND RELATED PROCESS|
CA2942652A| CA2942652A1|2015-09-24|2016-09-20|Three dimensional simulation system capable of creating a virtual environment uniting a plurality of users, and associated process|
US15/273,587| US20170092223A1|2015-09-24|2016-09-22|Three-dimensional simulation system for generating a virtual environment involving a plurality of users and associated method|